This paper addresses the important problem of ranking pre-trained deep neural networks and screening the most transferable ones for a downstream task. This is challenging because the ground-truth model ranking for each task can only be generated by fine-tuning the pre-trained models on the target dataset, which is brute-force and computationally expensive. Recent advanced methods propose several lightweight transferability metrics to predict the fine-tuning results. However, these approaches only capture static representations but neglect fine-tuning dynamics. To this end, this paper proposes a new transferability metric called \textbf{S}elf-challenging \textbf{F}isher \textbf{D}iscriminant \textbf{A}nalysis (\textbf{SFDA}), which has many appealing benefits that existing works do not have. First, SFDA can embed the static features into a Fisher space and refine them for better separability between classes. Second, SFDA uses a self-challenging mechanism to encourage different pre-trained models to differentiate on hard examples. Third, SFDA can easily select multiple pre-trained models for model ensemble. Extensive experiments on $33$ pre-trained models over $11$ downstream tasks show that SFDA is efficient, effective, and robust when measuring the transferability of pre-trained models. For instance, compared with the state-of-the-art method NLEEP, SFDA demonstrates an average of $59.1\%$ gain while bringing a $22.5$x speedup in wall-clock time. The code will be available at \url{https://github.com/tencentarc/sfda}.
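As a rough illustration of the Fisher-space idea (not the authors' code: the self-challenging mechanism and SFDA's exact scoring rule are omitted, and all names below are illustrative), one can score a frozen feature extractor by how separable the target classes become after a Fisher discriminant projection:

```python
# Minimal sketch: rank candidate pre-trained models by class separability of
# their frozen features in a Fisher (linear discriminant) space.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def fisher_transferability_score(features: np.ndarray, labels: np.ndarray) -> float:
    """features: (n, d) frozen embeddings from a candidate pre-trained model;
    labels: (n,) target-task class labels. Higher score = better separability."""
    lda = LinearDiscriminantAnalysis(solver="eigen", shrinkage="auto")
    lda.fit(features, labels)
    probs = lda.predict_proba(features)            # (n, n_classes)
    cols = np.searchsorted(lda.classes_, labels)   # column index of each true class
    # Mean posterior probability of the true class in Fisher space, used as a
    # proxy for how well fine-tuning on this task would go.
    return float(np.mean(probs[np.arange(len(labels)), cols]))

# Ranking checkpoints then reduces to sorting by this score:
# scores = {name: fisher_transferability_score(feats[name], y) for name in models}
```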
Unsupervised contrastive learning on indoor-scene point clouds has achieved great success. However, unsupervised learning of point clouds in outdoor scenes remains challenging because previous methods need to reconstruct the whole scene and capture partial views as contrastive targets. This is infeasible in outdoor scenes with moving objects, obstacles, and sensors. In this paper, we propose CO^3, namely Cooperative Contrastive Learning and Contextual Shape Prediction, to learn 3D representations of outdoor-scene point clouds in an unsupervised manner. CO^3 has several merits compared to existing methods. (1) It utilizes LiDAR point clouds from both the vehicle side and the infrastructure side to build views that differ sufficiently yet maintain the common semantic information needed for contrastive learning, which are more suitable than the views built by previous methods. (2) Alongside the contrastive objective, shape-context prediction is proposed as a pre-training goal and brings more task-relevant information for unsupervised 3D point cloud representation learning, which is beneficial when transferring the learned representations to downstream detection tasks. (3) Compared to previous methods, the representations learned by CO^3 can be transferred to different outdoor-scene datasets collected by different types of LiDAR sensors. (4) CO^3 improves the current state-of-the-art methods on both the ONCE and KITTI datasets by up to 2.58 mAP. Codes and models will be released. We believe CO^3 will facilitate understanding LiDAR point clouds in outdoor scenes.
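To make the cooperative-contrast idea concrete, here is a minimal InfoNCE-style sketch in which embeddings of the same region from the vehicle-side and infrastructure-side LiDARs form positive pairs; the encoder, tensor shapes, and temperature are assumptions, not the authors' implementation:

```python
# Symmetric InfoNCE between two cooperative LiDAR views of the same regions.
import torch
import torch.nn.functional as F

def cooperative_info_nce(z_vehicle: torch.Tensor,
                         z_infra: torch.Tensor,
                         temperature: float = 0.07) -> torch.Tensor:
    """z_vehicle, z_infra: (N, d) embeddings; row i of each tensor is assumed
    to describe the same physical region seen from the two sides."""
    z_v = F.normalize(z_vehicle, dim=1)
    z_i = F.normalize(z_infra, dim=1)
    logits = z_v @ z_i.t() / temperature           # (N, N) cosine similarities
    targets = torch.arange(z_v.size(0), device=z_v.device)
    # Each view must retrieve its counterpart among the N candidates.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```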
Vision Transformer (ViT) and its variants (e.g., Swin, PVT) have achieved great success in various computer vision tasks, owing to their ability to learn long-range contextual information. Layer Normalization (LN) is an essential ingredient in these models. However, we find that ordinary LN makes tokens at different positions similar in magnitude because it normalizes the embeddings within each token, which makes it difficult for Transformers to capture inductive biases such as positional context in an image. We tackle this problem by proposing a new normalizer, termed Dynamic Token Normalization (DTN), where normalization is performed both within each token (intra-token) and across different tokens (inter-token). DTN has several merits. Firstly, it is built on a unified formulation and can thus represent various existing normalization methods. Secondly, DTN learns to normalize tokens in both intra-token and inter-token manners, enabling Transformers to capture both global contextual information and local positional context. Thirdly, by simply replacing LN layers, DTN can be readily plugged into various vision transformers, such as ViT, Swin, PVT, LeViT, T2T-ViT, BigBird, and Reformer. Extensive experiments show that transformers equipped with DTN consistently outperform the baseline models with minimal extra parameters and computational overhead. For example, DTN outperforms LN by $0.5\%$-$1.2\%$ top-1 accuracy on ImageNet, by $1.2$-$1.4$ box AP in object detection on the COCO benchmark, by $2.3\%$-$3.9\%$ mCE in robustness experiments on ImageNet-C, and by $0.5\%$-$0.8\%$ accuracy in Long ListOps on the Long-Range Arena. Codes will be made public at \url{https://github.com/wqshao126/dtn}.
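A simplified reading of the intra-/inter-token idea can be sketched as a drop-in module that blends per-token (LayerNorm-style) and cross-token normalization with a learnable gate; this illustrates the abstract's description under stated assumptions and is not the paper's exact formulation:

```python
# Sketch of a normalizer blending intra-token and inter-token statistics.
import torch
import torch.nn as nn

class DynamicTokenNorm(nn.Module):
    def __init__(self, dim: int, eps: float = 1e-6):
        super().__init__()
        self.eps = eps
        self.weight = nn.Parameter(torch.ones(dim))
        self.bias = nn.Parameter(torch.zeros(dim))
        self.gate = nn.Parameter(torch.zeros(dim))  # 0 -> equal blend after sigmoid

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, dim)
        intra = (x - x.mean(-1, keepdim=True)) / torch.sqrt(
            x.var(-1, keepdim=True, unbiased=False) + self.eps)   # within each token
        inter = (x - x.mean(1, keepdim=True)) / torch.sqrt(
            x.var(1, keepdim=True, unbiased=False) + self.eps)    # across tokens
        g = torch.sigmoid(self.gate)   # learnable per-channel blend in (0, 1)
        return (g * intra + (1.0 - g) * inter) * self.weight + self.bias
```

With g fixed at 1 the module reduces to plain LayerNorm, which is one way the unified-formulation claim can be read.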
Weakly-supervised object localization aims to indicate the category as well as the scope of an object in an image given only image-level labels. Most existing works are based on Class Activation Mapping (CAM) and endeavor to enlarge the discriminative area inside the activation map to perceive the whole object, yet ignore the co-occurrence confounder of object and context (e.g., fish and water), which makes it hard for the model to distinguish object boundaries. Besides, the use of CAM also poses a dilemma: classification and localization always suffer from a performance gap and cannot reach their highest accuracies simultaneously. In this paper, we propose a causal knowledge distillation method, dubbed KD-CI-CAM, to address these two under-explored issues in one go. More specifically, we tackle the co-occurrence context confounder problem via causal intervention (CI), which explores the causalities among image features, contexts, and categories to eliminate the biased object-context entanglement in the class activation maps. Based on the de-biased object feature, we additionally propose a multi-teacher causal distillation framework to balance the absorption of classification knowledge and localization knowledge during model training. Extensive experiments on several benchmarks demonstrate the effectiveness of KD-CI-CAM in learning clear object boundaries from confounding contexts and in addressing the dilemma between classification and localization performance.
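The multi-teacher balance can be sketched as a weighted sum of a ground-truth term and two KL distillation terms, one per teacher; the weights, temperature, and the causal-intervention de-biasing step are assumptions left out of this sketch:

```python
# Sketch of a two-teacher distillation loss balancing classification and
# localization knowledge; all hyperparameters are illustrative.
import torch.nn.functional as F

def multi_teacher_kd_loss(student_logits, cls_teacher_logits, loc_teacher_logits,
                          labels, T: float = 4.0, alpha: float = 0.5, beta: float = 0.5):
    ce = F.cross_entropy(student_logits, labels)          # hard-label supervision
    log_p = F.log_softmax(student_logits / T, dim=1)
    kd_cls = F.kl_div(log_p, F.softmax(cls_teacher_logits / T, dim=1),
                      reduction="batchmean") * T * T      # classification teacher
    kd_loc = F.kl_div(log_p, F.softmax(loc_teacher_logits / T, dim=1),
                      reduction="batchmean") * T * T      # localization teacher
    # beta trades off the two teachers; alpha weights distillation vs. hard labels.
    return ce + alpha * (beta * kd_cls + (1.0 - beta) * kd_loc)
```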
Inferring missing links or detecting spurious ones based on observed graphs, known as link prediction, is a long-standing challenge in graph data analysis. With the recent advances in deep learning, graph neural networks have been used for link prediction and have achieved state-of-the-art performance. Nevertheless, existing methods developed for this purpose are typically discriminative, computing features of local subgraphs around two neighboring nodes and predicting potential links between them from the perspective of subgraph classification. In this formalism, the selection of enclosing subgraphs and heuristic structural features for subgraph classification significantly affects the performance of the methods. To overcome this limitation, this paper proposes a novel and radically different link prediction algorithm based on the network reconstruction theory, called GraphLP. Instead of sampling positive and negative links and heuristically computing the features of their enclosing subgraphs, GraphLP utilizes the feature learning ability of deep-learning models to automatically extract the structural patterns of graphs for link prediction under the assumption that real-world graphs are not locally isolated. Moreover, GraphLP explores high-order connectivity patterns to utilize the hierarchical organizational structures of graphs for link prediction. Our experimental results on all common benchmark datasets from different applications demonstrate that the proposed method consistently outperforms other state-of-the-art methods. Unlike the discriminative neural network models used for link prediction, GraphLP is generative, which provides a new paradigm for neural-network-based link prediction.
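As a toy illustration of the generative, reconstruction-based paradigm (standing in for, not reproducing, GraphLP's actual architecture), one can train a small autoencoder to reconstruct adjacency rows and read link scores off the reconstruction:

```python
# Toy adjacency-reconstruction model for link prediction.
import torch
import torch.nn as nn

class AdjacencyReconstructor(nn.Module):
    """Reconstruct adjacency rows; high reconstructed scores on observed zeros
    suggest missing links, low scores on observed ones flag spurious links."""
    def __init__(self, n_nodes: int, hidden: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_nodes, hidden), nn.ReLU())
        self.decoder = nn.Linear(hidden, n_nodes)

    def forward(self, adj_rows: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.decoder(self.encoder(adj_rows)))

# Training sketch: minimize nn.BCELoss()(model(adj), adj); afterwards rank all
# unobserved pairs (i, j) by the reconstructed score model(adj)[i, j].
```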
Human Activity Recognition (HAR) is one of the core research areas in mobile and wearable computing. With the application of deep learning (DL) techniques such as CNNs, recognizing periodic or static activities (e.g., walking, lying, cycling, etc.) has become a well-studied problem. What remains a major challenge, though, is the sporadic activity recognition (SAR) problem, where activities of interest tend to be non-periodic and occur less frequently compared with the often large amount of irrelevant background activities. Recent works suggested that sequential DL models (such as LSTMs) have great potential for modeling non-periodic behaviours, and in this paper we study some LSTM training strategies for SAR. Specifically, we propose two simple yet effective LSTM variants, namely the delay model and the inverse model, for two SAR scenarios (with and without a time-critical requirement). For time-critical SAR, the delay model can effectively exploit predefined delay intervals (within tolerance) as contextual information for improved performance. For the regular SAR task, the second proposed variant, the inverse model, learns patterns from the time series in an inverse (time-reversed) manner, which is complementary to the forward model (i.e., the LSTM), and combining both can boost performance. These two LSTM variants are very practical and can be deemed training strategies that leave the LSTM fundamentals unaltered. We also study some additional LSTM training strategies, which can further improve accuracy. We evaluate our models on two SAR and one non-SAR datasets, and the promising results demonstrate the effectiveness of our approaches in HAR applications.
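A minimal sketch of the inverse-model strategy, assuming a plain LSTM classifier: a second copy is trained on time-reversed windows and its predictions are averaged with the forward model's at inference. Hyperparameters and names are illustrative:

```python
# Forward + inverse (time-reversed) LSTM ensemble for sporadic activity recognition.
import torch
import torch.nn as nn

class LSTMClassifier(nn.Module):
    def __init__(self, n_features: int, n_classes: int, hidden: int = 128):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out, _ = self.lstm(x)           # x: (batch, time, features)
        return self.head(out[:, -1])    # classify from the final time step

def combined_prediction(forward_model: LSTMClassifier,
                        inverse_model: LSTMClassifier,
                        x: torch.Tensor) -> torch.Tensor:
    # The inverse model is trained on, and therefore fed, time-reversed windows.
    logits_f = forward_model(x)
    logits_i = inverse_model(torch.flip(x, dims=[1]))
    return 0.5 * (logits_f.softmax(-1) + logits_i.softmax(-1))
```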
Deep learning-based methods have achieved significant performance for image defogging. However, existing methods are mainly developed for land scenes and perform poorly when dealing with overwater foggy images, since overwater scenes typically contain large expanses of sky and water. In this work, we propose a Prior map Guided CycleGAN (PG-CycleGAN) for defogging images of overwater scenes. To promote the recovery of objects on the water, two loss functions are exploited for the network: a prior map is designed to invert the dark channel, and min-max normalization is used to suppress the sky and emphasize objects. However, due to the unpaired training set, the network may learn an under-constrained domain mapping from foggy to fog-free images, leading to artifacts and loss of details. Thus, we propose an intuitive Upscaling Inception Module (UIM) and a Long-range Residual Coarse-to-fine framework (LRC) to mitigate this issue. Extensive qualitative and quantitative comparisons demonstrate that the proposed method outperforms state-of-the-art supervised, semi-supervised, and unsupervised defogging approaches.
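One plausible reading of the prior map, assuming the standard per-patch dark channel, is sketched below: invert the dark channel of the foggy image and min-max-normalize it so that bright sky/water regions are suppressed and objects are emphasized. Patch size and normalization details are assumptions, not the paper's specification:

```python
# Sketch of an inverted-dark-channel prior map with min-max normalization.
import numpy as np

def prior_map(image: np.ndarray, patch: int = 15) -> np.ndarray:
    """image: (H, W, 3) float array in [0, 1]; returns an (H, W) prior map."""
    h, w, _ = image.shape
    pad = patch // 2
    padded = np.pad(image.min(axis=2), pad, mode="edge")  # per-pixel channel minimum
    dark = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            dark[i, j] = padded[i:i + patch, j:j + patch].min()  # patch-wise dark channel
    inverted = 1.0 - dark                                  # invert: hazy sky -> low values
    return (inverted - inverted.min()) / (inverted.max() - inverted.min() + 1e-8)
```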
Automatic image colorization is a particularly challenging problem. Due to the high ill-posedness of the problem and its multi-modal uncertainty, directly training a deep neural network usually leads to incorrect semantic colors and low color richness. Existing transformer-based methods can deliver better results but highly depend on hand-crafted dataset-level empirical distribution priors. In this work, we propose DDColor, a new end-to-end method with dual decoders for image colorization. More specifically, we design a multi-scale image decoder and a transformer-based color decoder. The former restores the spatial resolution of the image, while the latter establishes the correlation between semantic representations and color queries via cross-attention. The two decoders work together to learn semantic-aware color embeddings by leveraging multi-scale visual features. With the help of these two decoders, our method succeeds in producing semantically consistent and visually plausible colorization results without any additional priors. In addition, a simple but effective colorfulness loss is introduced to further improve the color richness of the generated results. Our extensive experiments demonstrate that the proposed DDColor achieves significantly superior performance to existing state-of-the-art works both quantitatively and qualitatively. Codes will be made publicly available.
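A colorfulness loss could, for instance, be built on the classic Hasler-Süsstrunk colorfulness statistic; the sketch below maximizes that statistic and is only one plausible instantiation, not necessarily the paper's exact loss:

```python
# Sketch of a colorfulness objective based on the Hasler-Susstrunk statistic.
import torch

def colorfulness_loss(rgb: torch.Tensor) -> torch.Tensor:
    """rgb: (batch, 3, H, W) in [0, 1]. Minimizing this loss pushes the
    generator toward more colorful outputs."""
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    rg = r - g                                 # red-green opponent channel
    yb = 0.5 * (r + g) - b                     # yellow-blue opponent channel
    sigma = torch.sqrt(rg.var(dim=(1, 2)) + yb.var(dim=(1, 2)) + 1e-8)
    mu = torch.sqrt(rg.mean(dim=(1, 2))**2 + yb.mean(dim=(1, 2))**2 + 1e-8)
    return -(sigma + 0.3 * mu).mean()          # negative colorfulness statistic
```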
Occupancy information is useful for efficient energy management in the building sector. The massive high-resolution electrical power consumption data collected by smart meters in the advanced metering infrastructure (AMI) network makes it possible to infer buildings' occupancy status in a non-intrusive way. In this paper, we propose a deep learning model called ABODE-Net, which employs a novel Parallel Attention (PA) block for building occupancy detection using smart meter data. The PA block combines temporal, variable, and channel attention modules in a parallel way to emphasize features important for occupancy detection. We adopt two smart meter datasets widely used for building occupancy detection in our performance evaluation. A set of state-of-the-art shallow machine learning and deep learning models is included for performance comparison. The results show that ABODE-Net significantly outperforms the other models in all experimental cases, which proves its validity as a solution for non-intrusive building occupancy detection.
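A hedged sketch of a parallel-attention block in this spirit: three squeeze-and-excite-style branches attend over the temporal, variable, and channel axes in parallel and are fused multiplicatively. The branch design and tensor layout are assumptions, not ABODE-Net's actual block:

```python
# Sketch: parallel temporal/variable/channel attention over smart-meter tensors.
import torch
import torch.nn as nn

class ParallelAttention(nn.Module):
    def __init__(self, channels: int, variables: int, timesteps: int):
        super().__init__()
        self.temporal = nn.Sequential(nn.Linear(timesteps, timesteps), nn.Sigmoid())
        self.variable = nn.Sequential(nn.Linear(variables, variables), nn.Sigmoid())
        self.channel = nn.Sequential(nn.Linear(channels, channels), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, variables, time)
        a_t = self.temporal(x.mean(dim=(1, 2)))   # (batch, time) weights
        a_v = self.variable(x.mean(dim=(1, 3)))   # (batch, variables) weights
        a_c = self.channel(x.mean(dim=(2, 3)))    # (batch, channels) weights
        # Fuse the three branches by broadcasting and elementwise product.
        return (x * a_t[:, None, None, :]
                  * a_v[:, None, :, None]
                  * a_c[:, :, None, None])
```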
Despite recent progress towards scaling up multimodal vision-language models, these models are still known to struggle on compositional generalization benchmarks such as Winoground. We find that a critical component lacking from current vision-language models is relation-level alignment: the ability to match directional semantic relations in text (e.g., "mug in grass") with spatial relationships in the image (e.g., the position of the mug relative to the grass). To tackle this problem, we show that relation alignment can be enforced by encouraging the directed language attention from 'mug' to 'grass' (capturing the semantic relation 'in') to match the directed visual attention from the mug to the grass. Tokens and their corresponding objects are softly identified using the cross-modal attention. We prove that this notion of soft relation alignment is equivalent to enforcing congruence between vision and language attention matrices under a 'change of basis' provided by the cross-modal attention matrix. Intuitively, our approach projects visual attention into the language attention space to calculate its divergence from the actual language attention, and vice versa. We apply our Cross-modal Attention Congruence Regularization (CACR) loss to UNITER and improve on the state-of-the-art approach to Winoground.
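The change-of-basis intuition can be sketched as follows: with a cross-modal attention matrix C mapping language tokens to image regions, project the visual self-attention into the language space via C A_v C^T and penalize its divergence from the language self-attention, and vice versa. The normalization and divergence choices below are assumptions, not the paper's exact loss:

```python
# Sketch of a cross-modal attention-congruence penalty.
import torch
import torch.nn.functional as F

def cacr_loss(A_l: torch.Tensor, A_v: torch.Tensor, C: torch.Tensor) -> torch.Tensor:
    """A_l: (T, T) language self-attention, A_v: (R, R) visual self-attention,
    C: (T, R) cross-modal attention; all rows assumed to sum to 1."""
    proj_v = C @ A_v @ C.t()     # visual attention expressed in the language basis
    proj_l = C.t() @ A_l @ C     # language attention expressed in the visual basis
    # Row-normalize the projections so each row is again a distribution.
    proj_v = proj_v / proj_v.sum(-1, keepdim=True).clamp_min(1e-8)
    proj_l = proj_l / proj_l.sum(-1, keepdim=True).clamp_min(1e-8)
    kl = lambda p, q: F.kl_div((q + 1e-8).log(), p, reduction="batchmean")
    # Penalize divergence between each modality's attention and the other's projection.
    return kl(A_l, proj_v) + kl(A_v, proj_l)
```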